Out-of-distribution (OOD) detection is important for deploying machine learning models in the real world, where test data from shifted distributions can naturally arise. While a plethora of algorithmic approaches to OOD detection have recently emerged, a critical gap remains in the theoretical understanding. In this work, we develop an analytical framework that characterizes and unifies the theoretical understanding of OOD detection. Our analytical framework motivates a novel OOD detection method for neural networks, GEM, which demonstrates both theoretical and empirical advantages. In particular, with CIFAR-100 as the in-distribution data, our method outperforms a competitive baseline by 16.57% (FPR95). Lastly, we formally provide provable guarantees and a comprehensive analysis of our method, underpinning how various properties of the data distribution affect the performance of OOD detection.
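As a rough illustration of what a Gaussian-based OOD score of this kind can look like, the sketch below fits class-conditional Gaussians with a shared covariance to in-distribution features and scores a test point by the log-sum-exp of the class-wise log-densities; the function names and the shared-covariance assumption are illustrative, not the paper's exact GEM formulation.

```python
import numpy as np
from scipy.special import logsumexp

def fit_class_gaussians(features, labels):
    # features: (N, d) penultimate-layer features of in-distribution data,
    # labels: (N,) integer class ids assumed to be 0..K-1 (contiguous).
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def gaussian_energy_score(x, means, precision):
    # Higher score -> more in-distribution-like: log-sum-exp of the
    # class-conditional Gaussian log-densities (up to a shared constant).
    diffs = means - x                                     # (K, d)
    mahalanobis = np.einsum('kd,de,ke->k', diffs, precision, diffs)
    return logsumexp(-0.5 * mahalanobis)
```

A test input would be flagged as OOD when this score falls below a threshold calibrated on held-out in-distribution data.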
Numerous models have tried to effectively embed knowledge graphs in low dimensions. Among the state-of-the-art methods, Graph Neural Network (GNN) models provide structure-aware representations of knowledge graphs. However, they often use the information of relations, and their interactions with entities, inefficiently. Moreover, most state-of-the-art knowledge graph embedding models suffer from scalability issues because they assign high-dimensional embeddings to entities and relations. To address these limitations, we propose a scalable general knowledge graph encoder that adaptively incorporates a powerful tensor decomposition method into the aggregation function of RGCN, a well-known relational GNN model. Specifically, the parameters of a low-rank core projection tensor, used to transform neighborhood entities in the encoder, are shared across relations to benefit from multi-task learning and to incorporate relation information effectively. In addition, we propose a low-rank estimation of the core tensor using CP decomposition to compress the model, which is also applicable, as a regularization method, to other similar linear models. We evaluate our model on knowledge graph completion as a common downstream task. We train our model using a new loss function based on contrastive learning, which relieves the training limitation of the 1-N method on huge graphs. We improve RGCN performance on FB15-237 by 0.42% with considerably lower dimensionality of embeddings.
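A minimal PyTorch sketch of the core idea, a low-rank (CP-style) projection of neighbour embeddings whose factor matrices are shared across relations while each relation learns only a small reweighting vector; the class name, dimensions, and exact factorization below are assumptions for illustration rather than the proposed encoder.

```python
import torch
import torch.nn as nn

class CPCoreProjection(nn.Module):
    """Low-rank projection of neighbour embeddings, shared across relations.
    Each relation learns only a rank-sized vector that reweights the shared factors."""
    def __init__(self, num_relations, dim_in, dim_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(rank, dim_in) * 0.01)    # shared input factors
        self.V = nn.Parameter(torch.randn(rank, dim_out) * 0.01)   # shared output factors
        self.rel = nn.Parameter(torch.randn(num_relations, rank) * 0.01)  # per-relation weights

    def forward(self, h_neigh, rel_ids):
        # h_neigh: (E, dim_in) neighbour embeddings, rel_ids: (E,) relation index per edge
        z = h_neigh @ self.U.t()            # (E, rank)
        z = z * self.rel[rel_ids]           # relation-specific reweighting in the low-rank space
        return z @ self.V                   # (E, dim_out)
```

Compared with a full per-relation projection matrix, this keeps the parameter count roughly rank-linear in the embedding size, which is the scalability benefit the abstract points to.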
Diffusion Probabilistic Models (DPMs) have recently been employed for image deblurring. DPMs are trained via a stochastic denoising process that maps Gaussian noise to the high-quality image, conditioned on the concatenated blurry input. Despite their high-quality generated samples, image-conditioned Diffusion Probabilistic Models (icDPMs) rely on synthetic pairwise training data (in-domain), with potentially unclear robustness towards real-world unseen images (out-of-domain). In this work, we investigate the generalization ability of icDPMs in deblurring and propose a simple but effective guidance that significantly alleviates artifacts and improves out-of-distribution performance. In particular, we propose to first extract a multiscale domain-generalizable representation from the input image that removes domain-specific information while preserving the underlying image structure. The representation is then added to the feature maps of the conditional diffusion model as extra guidance that helps improve generalization. To benchmark, we focus on out-of-distribution performance by applying a single-dataset-trained model to three external and diverse test sets. The effectiveness of the proposed formulation is demonstrated by improvements over the standard icDPM, as well as state-of-the-art performance on perceptual quality and competitive distortion metrics compared to existing methods.
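A hedged sketch of how such guidance could be injected: the extracted representation is projected to the channel width of a diffusion U-Net block and added to its feature map. The block structure, the 1x1 projection, and the injection point are assumptions; the paper's architecture may differ.

```python
import torch
import torch.nn as nn

class GuidedBlock(nn.Module):
    """Adds a domain-generalizable guidance feature to a U-Net block's feature map."""
    def __init__(self, channels, guidance_dim):
        super().__init__()
        self.proj = nn.Conv2d(guidance_dim, channels, kernel_size=1)  # match channel width
        self.body = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.SiLU())

    def forward(self, feat, guidance):
        # feat: (B, C, H, W); guidance: (B, guidance_dim, H, W), already resized to (H, W)
        return self.body(feat + self.proj(guidance))
```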
We consider the constrained Linear Inverse Problem (LIP), where a certain atomic norm (such as the $\ell_1$ norm or the nuclear norm) is minimized subject to a quadratic constraint. Typically, such cost functions are non-differentiable, which makes them not amenable to the fast optimization methods available in practice. We propose two equivalent reformulations of the constrained LIP with improved convex regularity: (i) a smooth convex minimization problem, and (ii) a strongly convex min-max problem. These problems can be solved by applying existing acceleration-based convex optimization methods, which provide an improved $O\left(1/k^2\right)$ theoretical convergence guarantee. However, to fully exploit the utility of these reformulations, we also provide a novel algorithm, which we refer to as the Fast Linear Inverse Problem Solver (FLIPS), that is tailored to solve the reformulation of the LIP. We demonstrate the performance of FLIPS on the sparse coding problem arising in image processing tasks. In this setting, we observe that FLIPS consistently outperforms the Chambolle-Pock and C-SALSA algorithms, two of the best current methods in the literature.
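For concreteness, the snippet below states a small instance of the constrained LIP for sparse coding (minimize the $\ell_1$ atomic norm subject to a quadratic data-fit constraint) and solves it with a generic convex solver via cvxpy; it only illustrates the problem that FLIPS targets, not the FLIPS algorithm or the proposed reformulations, and the dictionary, noise level, and tolerance are made-up values.

```python
import numpy as np
import cvxpy as cp

# Synthetic sparse-coding instance (assumed sizes): recover a sparse code x
# from noisy measurements y = D x* + noise.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                       # dictionary
x_true = rng.standard_normal(256) * (rng.random(256) < 0.05)
y = D @ x_true + 0.01 * rng.standard_normal(64)

x = cp.Variable(256)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),           # atomic (l1) norm objective
                     [cp.norm(D @ x - y, 2) <= 0.1])     # quadratic data-fit constraint
problem.solve()                                          # generic solver, not FLIPS
print(problem.value, np.count_nonzero(np.abs(x.value) > 1e-4))
```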
Recent semi-supervised and self-supervised methods have shown great success in the image and text domains by utilizing augmentation techniques. Despite such success, it is not easy to transfer it to tabular domains: domain-specific transformations from images and language are hard to adapt to tabular data because the tabular domain mixes different data types (continuous and categorical). A few semi-supervised works on the tabular domain have focused on proposing new augmentation techniques for tabular data. These approaches may show some improvement on datasets with low-cardinality categorical data, but the fundamental challenges have not been tackled: the proposed methods either do not apply to datasets with high cardinality or do not use an efficient encoding of categorical data. We propose using a conditional probability representation and an efficient progressive feature upgrading framework to effectively learn representations for tabular data in semi-supervised applications. Extensive experiments show the superior performance of the proposed framework and its potential application in semi-supervised settings.
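A minimal sketch of a conditional-probability style encoding for high-cardinality categorical columns, replacing each category by a smoothed estimate of P(target | category); the smoothing scheme and the function name are assumptions and may differ from the representation used in the paper.

```python
import pandas as pd

def conditional_probability_encode(df, cat_col, target_col, smoothing=10.0):
    """Replace a high-cardinality categorical column with P(target | category),
    smoothed toward the global target rate to stabilise rare categories."""
    global_rate = df[target_col].mean()
    stats = df.groupby(cat_col)[target_col].agg(['mean', 'count'])
    encoded = (stats['count'] * stats['mean'] + smoothing * global_rate) \
              / (stats['count'] + smoothing)
    # Unseen categories fall back to the global rate.
    return df[cat_col].map(encoded).fillna(global_rate)
```

Unlike one-hot encoding, the dimensionality of this representation does not grow with the number of categories, which is what makes it attractive for high-cardinality columns.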
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability while reducing end-users' energy bills. However, conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy that is achieved using deep reinforcement learning. Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., rooftop PV panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements in the 24-hour accumulated profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators.
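As a rough sketch of the DQN component, the code below defines a small Q-network over a household/grid state and an epsilon-greedy action rule for discrete DR actions; the state features, action set, and network sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector (e.g. price, demand, battery level, PV output, hour)
    to Q-values over discrete DR actions (e.g. shift load, charge/discharge, idle)."""
    def __init__(self, state_dim=8, num_actions=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, num_actions))

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, epsilon):
    # Epsilon-greedy policy used while training the agent.
    if torch.rand(1).item() < epsilon:
        return torch.randint(0, q_net.net[-1].out_features, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax().item()
```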
DeepAngle is a machine learning-based method to determine the contact angles of different phases in tomography images of porous materials. Measuring angles in 3D must be done within the surface perpendicular to the angle planes, and it can become inaccurate when dealing with the discretized space of the image voxels. A computationally intensive solution is to correlate and vectorize all surfaces using an adaptable grid and then measure the angles within the desired planes. In contrast, the present study provides a rapid and low-cost technique powered by deep learning to estimate the interfacial angles directly from images. DeepAngle is tested on both synthetic and realistic images against the direct measurement technique and is found to improve the r-squared by 5 to 16% while lowering the computational cost by a factor of 20. This rapid method is especially applicable for processing large tomography data and time-resolved images, which is computationally intensive. The developed code and the dataset are available in an open repository on GitHub (https://www.github.com/ArashRabbani/DeepAngle).
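A hedged sketch of the direct-estimation idea: a tiny 3D CNN that regresses a contact angle from a voxel patch around an interface point. The architecture, patch size, and output range below are illustrative assumptions rather than DeepAngle's actual network.

```python
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    """Regresses a contact angle (in degrees) from a segmented voxel patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, 1)

    def forward(self, patch):                 # patch: (B, 1, D, H, W)
        # Sigmoid maps the output into a 0-180 degree range.
        return 180.0 * torch.sigmoid(self.head(self.features(patch).flatten(1)))
```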
Many existing datasets for lidar place recognition are solely representative of structured urban environments and have recently been saturated in performance by deep learning-based approaches. Natural and unstructured environments present many additional challenges for long-term localisation, but these environments are not represented in currently available datasets. To address this, we introduce Wild-Places, a challenging large-scale dataset for lidar place recognition in unstructured, natural environments. Wild-Places contains eight lidar sequences collected with a handheld sensor payload over the course of fourteen months, containing a total of 67K undistorted lidar submaps along with accurate 6DoF ground truth. Our dataset contains multiple revisits both within and between sequences, allowing for both intra-sequence (i.e. loop closure detection) and inter-sequence (i.e. re-localisation) place recognition. We also benchmark several state-of-the-art approaches to demonstrate the challenges that this dataset introduces, particularly for long-term place recognition due to natural environments changing over time. Our dataset and code will be available at https://csiro-robotics.github.io/Wild-Places.
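A common way such place-recognition benchmarks are scored is Recall@1 with a distance threshold on the ground-truth poses; the sketch below assumes L2-normalised global descriptors and a 3 m threshold, both of which are illustrative choices rather than the dataset's official evaluation protocol.

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=3.0):
    """A query is counted correct if its nearest database descriptor was captured
    within dist_thresh metres of the query's ground-truth position."""
    sims = query_desc @ db_desc.T                     # cosine similarity for unit-norm descriptors
    nearest = sims.argmax(axis=1)
    errors = np.linalg.norm(query_pos - db_pos[nearest], axis=1)
    return float((errors < dist_thresh).mean())
```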
Zero-shot learning (ZSL) on 3D point cloud data is a related but underexplored problem compared to its 2D image counterpart. 3D data poses new challenges for ZSL because pre-trained feature-extraction models are not available. To address this, we propose a prompt-guided 3D scene generation and supervision method that augments the 3D data so that the network learns better, exploring the complex interplay between seen and unseen objects. First, we merge the point clouds of two 3D models in certain ways described by a prompt; the prompt acts like an annotation describing each 3D scene. We then perform contrastive learning to train our proposed architecture in an end-to-end manner. We argue that 3D scenes can relate objects more efficiently than single objects, because popular language models (such as BERT) achieve high performance when objects appear in context. Our proposed prompt-guided scene generation method couples data augmentation with prompt-based annotation/captioning to improve 3D ZSL performance. We achieve state-of-the-art ZSL and generalized ZSL performance on synthetic (ModelNet40, ModelNet10) and real-scanned (ScanObjectNN) 3D object datasets.
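A minimal sketch of prompt-guided scene composition: two object point clouds are merged with a spatial offset and paired with a prompt-style caption. The offset, the prompt template, and the function name are assumptions for illustration; the paper's generation and contrastive training pipeline is richer than this.

```python
import numpy as np

def compose_scene(points_a, points_b, label_a, label_b, offset=(1.0, 0.0, 0.0)):
    """Merge two object point clouds (N, 3) into a simple two-object scene and
    build a prompt-style caption describing it."""
    scene = np.concatenate([points_a, points_b + np.asarray(offset)], axis=0)
    prompt = f"a scene with a {label_a} next to a {label_b}"
    return scene, prompt
```

Such (scene, prompt) pairs would then feed a contrastive objective that pulls matched scene and text embeddings together and pushes mismatched pairs apart.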
We consider the problem of learning a stochastic differential equation of the form $dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t$ from one sample trajectory. This problem is more challenging than learning a deterministic dynamical system, because one sample trajectory only provides indirect information about the unknown functions $f$ and $\sigma$ and the stochastic process $dW_t$, which represent the drift, the diffusion, and the stochastic forcing term, respectively. We propose a simple kernel-based solution to this problem that can be decomposed as follows: (1) represent the time-increment map $X_t \rightarrow X_{t+dt}$ as a computational graph in which $f$, $\sigma$, and $dW_t$ appear as unknown functions and random variables; (2) complete the graph (approximate the unknown functions and random variables) via maximum a posteriori estimation (given the data) with Gaussian process (GP) priors on the unknown functions; (3) learn the covariance functions (kernels) of the GP priors from the data with randomized cross-validation. Numerical experiments illustrate the efficacy, robustness, and scope of our method.
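As a much-simplified illustration of the GP idea (not the paper's computational-graph completion), one can regress Euler-Maruyama increments of a 1-D trajectory on the current state with a GP to recover the drift $f$, letting a white-noise kernel absorb the $\sigma(X_t)\,dW_t$ term; the library choice and kernels below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def estimate_drift(x, dt):
    """x: one observed 1-D trajectory (x_0, ..., x_T) sampled with step dt.
    Under an Euler-Maruyama view, (x_{t+dt} - x_t)/dt is a noisy observation
    of f(x_t), so a GP fit to these increments approximates the drift."""
    inputs = x[:-1].reshape(-1, 1)
    targets = (x[1:] - x[:-1]) / dt
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(inputs, targets)
    return gp          # gp.predict(grid) evaluates the estimated drift on a grid
```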